triedb/pathdb: make batch with pre-allocated size #32914
Conversation
SGTM
```go
switch b.typ {
case typeStateHistory:
	size = estimatedStateHistoryIndexSize
case typeTrienodeHistory:
	size = estimatedTrienodeHistoryIndexSize
```
no default here? is using 0 future-proof?
I tracked the use, but I didn't go all the way down. I see pebble uses the size to do something like mallocgc, and then does slice arithmetic on it. I would return a safe value just to be sure it doesn't panic.
I added the default here. Only state history and trienode history are possible, so we can panic on an unknown type.
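The agreed-upon shape can be sketched as follows. This is a minimal, self-contained illustration, not the actual `triedb/pathdb` code: the `batchType` constants and the size values are placeholders standing in for the real `typeStateHistory`/`typeTrienodeHistory` constants and estimates, and the helper function name is invented for the example.

```go
package main

import "fmt"

// batchType mirrors the PR's batch type field; names are illustrative.
type batchType int

const (
	typeStateHistory batchType = iota
	typeTrienodeHistory
)

// Placeholder size estimates; the real constants live in triedb/pathdb.
const (
	estimatedStateHistoryIndexSize    = 16 * 1024
	estimatedTrienodeHistoryIndexSize = 32 * 1024
)

// estimatedSize returns the pre-allocation hint for a batch type and,
// per the review discussion, panics on anything unknown rather than
// silently returning 0.
func estimatedSize(typ batchType) int {
	switch typ {
	case typeStateHistory:
		return estimatedStateHistoryIndexSize
	case typeTrienodeHistory:
		return estimatedTrienodeHistoryIndexSize
	default:
		panic(fmt.Sprintf("unknown batch type %d", typ))
	}
}

func main() {
	fmt.Println(estimatedSize(typeStateHistory))
	fmt.Println(estimatedSize(typeTrienodeHistory))
}
```

Panicking in the default case trades future-proofness for loudness: a new batch type added without a size estimate fails immediately instead of pre-allocating a zero-sized batch.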
OK, I was hoping for a warning, but I agree that this is not a concern right now. It's just the future-proofness I was considering. Nvm, I'll merge.
In this PR, the database batch for writing the history index data is pre-allocated. It's observed that the database batch repeatedly grows the underlying mega-batch, causing significant memory-allocation pressure. Pre-allocating the batch at its estimated final size effectively mitigates this overhead.
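The allocation pattern the PR avoids can be sketched in plain Go, independent of the actual batch implementation. The function names below are invented for illustration; the point is that an `append` loop with no capacity hint reallocates repeatedly as it grows, while sizing the buffer up front does a single allocation:

```go
package main

import "fmt"

// writeGrowing appends n fixed-size entries with no capacity hint,
// so the slice is reallocated and copied repeatedly as it grows.
func writeGrowing(n, entrySize int) []byte {
	var buf []byte
	entry := make([]byte, entrySize)
	for i := 0; i < n; i++ {
		buf = append(buf, entry...)
	}
	return buf
}

// writePreallocated sizes the buffer up front, so append never
// reallocates; this mirrors pre-sizing the database batch.
func writePreallocated(n, entrySize int) []byte {
	buf := make([]byte, 0, n*entrySize)
	entry := make([]byte, entrySize)
	for i := 0; i < n; i++ {
		buf = append(buf, entry...)
	}
	return buf
}

func main() {
	a := writeGrowing(1000, 64)
	b := writePreallocated(1000, 64)
	// Both hold the same data, but only the pre-allocated buffer's
	// capacity exactly matches its length (no over-growth, one alloc).
	fmt.Println(len(a), len(b), cap(b) == len(b))
}
```

The same idea applies one level down: pebble uses the size hint to allocate the batch's backing storage, so an accurate estimate avoids both repeated growth and wasted slack.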